Holonomic gradient descent and its application to the Fisher–Bingham integral



Similar articles

Holonomic Gradient Descent and its Application to Fisher-Bingham Integral

Gradient descent is a general method for finding a local minimum of a smooth function f(z1, . . . , zd). It relies on the observation that f(p) decreases if one moves from a point z = p in a "nice" direction, which is usually −(∇f)(p). As textbooks on optimization show (see, e.g., [5], [16]), this method and its variations have been studied extensively. We suggest a new variation o...
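A minimal sketch of the plain gradient descent described in this abstract (not the holonomic variant the paper proposes) is given below; the quadratic test function, fixed step size, and stopping rule are illustrative assumptions.

```python
# Plain gradient descent: from a point p, repeatedly move in the
# direction -(grad f)(p), which decreases f for a small enough step size.
import numpy as np

def gradient_descent(grad_f, p0, step=0.1, tol=1e-8, max_iter=10_000):
    """Return an approximate local minimizer, following -grad_f from p0."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(p)
        if np.linalg.norm(g) < tol:    # gradient (almost) vanishes: stop
            break
        p = p - step * g               # step in the "nice" direction
    return p

# Illustrative test function: f(z1, z2) = (z1 - 1)^2 + 2*z2^2, minimum at (1, 0).
f = lambda z: (z[0] - 1.0) ** 2 + 2.0 * z[1] ** 2
grad_f = lambda z: np.array([2.0 * (z[0] - 1.0), 4.0 * z[1]])

p = gradient_descent(grad_f, p0=[5.0, -3.0])
print(p, f(p))                         # close to [1. 0.] and 0.0
```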


Construction and Validation of Translation Metacognitive Strategy Questionnaire and its Application to Translation Quality

Like any other learning activity, translation is a problem-solving activity which involves executing parallel cognitive processes. The ability to think about these higher processes and to plan, organize, monitor, and evaluate the most influential executive cognitive processes is what Flavell (1975) called "metacognition", which encompasses raising awareness of mental processes as well as using effectiv...

1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs

We show empirically that in SGD training of deep neural networks, one can, at no or nearly no loss of accuracy, quantize the gradients aggressively—to but one bit per value—if the quantization error is carried forward across minibatches (error feedback). This size reduction makes it feasible to parallelize SGD through data-parallelism with fast processors like recent GPUs. We implement data-par...
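A minimal sketch of the 1-bit quantization with error feedback described here, applied to a toy least-squares problem rather than the paper's speech DNNs, is shown below; the single per-tensor scale, learning rate, and model are illustrative assumptions.

```python
# 1-bit SGD with error feedback: each gradient is reduced to one bit per
# value (its sign, times one shared scale), and the quantization error is
# added back into the next minibatch's gradient before quantizing again.
import numpy as np

rng = np.random.default_rng(0)

def one_bit_quantize(g):
    """Quantize g to sign * scale and return (quantized, quantization error)."""
    scale = np.mean(np.abs(g))         # one shared magnitude per tensor (assumption)
    q = scale * np.sign(g)
    return q, g - q

# Toy problem: linear least squares y = X @ w_true.
X = rng.normal(size=(256, 10))
w_true = rng.normal(size=10)
y = X @ w_true

w = np.zeros(10)
error = np.zeros(10)                   # error-feedback memory
lr = 0.05
for epoch in range(200):
    for start in range(0, len(X), 32): # minibatches of 32 samples
        xb, yb = X[start:start + 32], y[start:start + 32]
        grad = 2.0 * xb.T @ (xb @ w - yb) / len(xb)
        q, error = one_bit_quantize(grad + error)  # carry the error forward
        w -= lr * q                    # only the 1-bit gradient is applied/communicated

print("max |w - w_true| =", np.max(np.abs(w - w_true)))
```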


Learning to learn by gradient descent by gradient descent

The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorit...
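As an illustration of casting optimizer design as a learning problem, the sketch below meta-trains a vector of per-coordinate step sizes by gradient descent (using a finite-difference meta-gradient) so that a short inner gradient-descent run does well on a family of related quadratic tasks; this tiny parametric update rule and the quadratic task family are illustrative assumptions, not the recurrent optimizer described in the paper.

```python
# Meta-learning a simple update rule: the "optimizer" is z <- z - phi * grad
# with a learned per-coordinate step-size vector phi, and phi itself is
# trained by gradient descent on the loss reached after a short unrolled run.
import numpy as np

rng = np.random.default_rng(1)
d = 5
M = rng.normal(size=(d, d))
A = M @ M.T + np.eye(d)                  # curvature shared by all tasks

def unrolled_loss(phi, b, steps=20):
    """Loss 0.5*z'Az - b'z reached after `steps` updates of the learned rule."""
    z = np.zeros(d)
    for _ in range(steps):
        z = z - phi * (A @ z - b)        # learned per-coordinate steps
    return 0.5 * z @ A @ z - b @ z

tasks = [rng.normal(size=d) for _ in range(16)]    # tasks differ only in b
meta_obj = lambda phi: np.mean([unrolled_loss(phi, b) for b in tasks])

phi = np.full(d, 0.05)                   # start from a small uniform step size
meta_lr, eps = 0.003, 1e-4
print("meta-objective before:", meta_obj(phi))
for _ in range(150):                     # outer loop: train the optimizer itself
    g = np.zeros(d)
    for i in range(d):                   # central finite-difference meta-gradient
        e = np.zeros(d); e[i] = eps
        g[i] = (meta_obj(phi + e) - meta_obj(phi - e)) / (2 * eps)
    phi -= meta_lr * np.clip(g, -1.0, 1.0)   # small, clipped meta-steps for stability
print("meta-objective after: ", meta_obj(phi))
print("learned step sizes:", np.round(phi, 3))
```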


Learning to Learn without Gradient Descent by Gradient Descent

We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-paramete...



Journal

Journal title: Advances in Applied Mathematics

Year: 2011

ISSN: 0196-8858

DOI: 10.1016/j.aam.2011.03.001